Rendering in solidThinking

This section describes how to render realistic images using the advanced shading techniques available in the renderThinking rendering module. solidThinking uses renderThinking as its main rendering engine, but it is also open to supporting other renderers, including RenderMan-compliant ones.

 

Included are details related to shaded rendering: scan-line rendering, the use of shaders and shader libraries, texturing, materials, environment mapping, and ray tracing.

 

Shading can produce images that are much easier to understand than line-drawn images, which are familiar to engineers but not to managers and other non-technical people. Shading geometry according to its orientation to the light sources in a scene provides the viewer with important visual cues to the arrangement of the objects in three-dimensional space, even though they are rendered onto a two-dimensional image.

 

The simplest form of shading provided in renderThinking is Flat shading: the orientation of a surface to the light source(s) is established, and the whole surface is then shaded lighter if it faces toward the light sources, or darker if it faces away. Even though this method leads to images that appear to contain solid, three-dimensional objects, geometry is ultimately represented as flat polygons, so this simplistic method of shading results in images in which the facets of the geometry are apparent.
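
The idea can be sketched in a few lines of Python; the function and variable names here are purely illustrative and are not part of the renderThinking API:

    import math

    def normalize(v):
        length = math.sqrt(sum(c * c for c in v)) or 1.0
        return tuple(c / length for c in v)

    def face_normal(a, b, c):
        # Normal of the polygon from two of its edges (cross product).
        u = tuple(b[i] - a[i] for i in range(3))
        w = tuple(c[i] - a[i] for i in range(3))
        n = (u[1] * w[2] - u[2] * w[1],
             u[2] * w[0] - u[0] * w[2],
             u[0] * w[1] - u[1] * w[0])
        return normalize(n)

    def flat_shade(triangle, light_dir, base_color):
        # One intensity per polygon: lighter facing the light, darker facing away.
        n = face_normal(*triangle)
        intensity = max(0.0, sum(n[i] * light_dir[i] for i in range(3)))
        return tuple(intensity * c for c in base_color)

    tri = ((0, 0, 0), (1, 0, 0), (0, 1, 0))
    print(flat_shade(tri, (0.0, 0.0, 1.0), (0.8, 0.2, 0.2)))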

 

Two more sophisticated shading methods are Gouraud shading and Phong shading. The Gouraud method 'smooths' the shading by linearly interpolating the colors computed at the vertices, providing a good approximation of smooth matte surfaces. The Phong method, on the other hand, mimics the effects of highlights by interpolating the normal vector of the surface between the vertices, and then using this normal as the starting point for a new shading calculation at each pixel. The latter method gives a good approximation of smooth surfaces that can exhibit specular highlights.
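
The difference between the two methods can be sketched as follows, again in illustrative Python rather than renderThinking code: Gouraud interpolates colors already shaded at the vertices, while Phong interpolates the normals and repeats the shading calculation, including the specular term, at every pixel.

    import math

    def normalize(v):
        length = math.sqrt(sum(c * c for c in v)) or 1.0
        return tuple(c / length for c in v)

    def lerp(a, b, t):
        return tuple(a[i] + t * (b[i] - a[i]) for i in range(3))

    def shade(normal, light_dir, view_dir, base_color, shininess=32.0):
        n = normalize(normal)
        diffuse = max(0.0, sum(n[i] * light_dir[i] for i in range(3)))
        # Specular highlight via the halfway vector (Blinn-Phong style, for brevity).
        h = normalize(tuple(light_dir[i] + view_dir[i] for i in range(3)))
        specular = max(0.0, sum(n[i] * h[i] for i in range(3))) ** shininess
        return tuple(min(1.0, diffuse * c + specular) for c in base_color)

    def gouraud_pixel(vertex_color_a, vertex_color_b, t):
        # Gouraud: interpolate colors that were already shaded at the vertices.
        return lerp(vertex_color_a, vertex_color_b, t)

    def phong_pixel(normal_a, normal_b, t, light_dir, view_dir, base_color):
        # Phong: interpolate the normal, then run the shading calculation again
        # at this pixel, so specular highlights can appear inside a polygon.
        return shade(lerp(normal_a, normal_b, t), light_dir, view_dir, base_color)

    light, view = (0.0, 1.0, 0.0), (0.0, 1.0, 0.0)
    a = shade((0.0, 1.0, 0.0), light, view, (0.6, 0.6, 0.9))
    b = shade((1.0, 0.0, 0.0), light, view, (0.6, 0.6, 0.9))
    print(gouraud_pixel(a, b, 0.5))
    print(phong_pixel((0.0, 1.0, 0.0), (1.0, 0.0, 0.0), 0.5, light, view, (0.6, 0.6, 0.9)))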

 

Antialiasing techniques are also available to reduce the jagged appearance of silhouettes and texture patterns. These work by area-sampling each pixel so as to derive an average color that is representative of all of the geometry and patterning contained within the pixel's area.
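
A minimal sketch of this area-sampling approach, assuming a hypothetical sample_scene function that returns the scene color at a sub-pixel position:

    def antialiased_pixel(sample_scene, px, py, grid=4):
        # Average several sub-pixel samples so edges blend instead of aliasing.
        r = g = b = 0.0
        for i in range(grid):
            for j in range(grid):
                sr, sg, sb = sample_scene(px + (i + 0.5) / grid,
                                          py + (j + 0.5) / grid)
                r, g, b = r + sr, g + sg, b + sb
        n = grid * grid
        return (r / n, g / n, b / n)

    # Example: a hard vertical edge at x = 10.5 averages out to a mid gray.
    edge = lambda x, y: (1.0, 1.0, 1.0) if x < 10.5 else (0.0, 0.0, 0.0)
    print(antialiased_pixel(edge, 10, 0))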

 

It is also possible to attach a texture to a surface during the rendering process, which is often a good technique for adding realism to a scene. This can be done in one of two ways: 1) an image can be scanned into an image file and then applied to a surface with a given orientation (image texture mapping); 2) a texture can be defined mathematically, so that a procedure establishes the correct color of any given point on the surface (procedural texture).
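
A procedural texture is simply a function of the surface coordinates; the classic checkerboard below is a small illustrative example, not a built-in renderThinking shader:

    import math

    def checker_texture(u, v, scale=8.0,
                        color_a=(1.0, 1.0, 1.0), color_b=(0.1, 0.1, 0.1)):
        # Alternate two colors over a grid of squares in (u, v) space.
        if (math.floor(u * scale) + math.floor(v * scale)) % 2 == 0:
            return color_a
        return color_b

    print(checker_texture(0.05, 0.05))  # white square
    print(checker_texture(0.20, 0.05))  # dark square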

 

Mirror-like reflections are another way of making images look much more realistic. These, too, can be simulated in two ways. The first technique, called environment mapping, is similar to texture mapping: a virtual image created by the software is used to simulate the environment surrounding the geometry.
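
The following sketch shows the principle with hypothetical names: the reflected view direction is used to look up a color in a prepared image of the surroundings, which is far cheaper than tracing actual reflection rays.

    import math

    def reflect(view_dir, normal):
        # Mirror the view direction about the surface normal.
        d = sum(view_dir[i] * normal[i] for i in range(3))
        return tuple(view_dir[i] - 2.0 * d * normal[i] for i in range(3))

    def lookup_environment(env_image, direction):
        # Latitude/longitude mapping of a unit direction into the environment image.
        x, y, z = direction
        u = 0.5 + math.atan2(z, x) / (2.0 * math.pi)
        v = 0.5 - math.asin(max(-1.0, min(1.0, y))) / math.pi
        rows, cols = len(env_image), len(env_image[0])
        return env_image[min(int(v * rows), rows - 1)][min(int(u * cols), cols - 1)]

    # A tiny 2x4 "sky over ground" image stands in for the virtual environment.
    env = [[(0.2, 0.4, 0.9)] * 4,
           [(0.3, 0.3, 0.3)] * 4]
    print(lookup_environment(env, reflect((0.0, 0.0, -1.0), (0.0, 0.0, 1.0))))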

 

The second technique for handling the reflectivity of surfaces is ray tracing. This involves tracing the path of light rays backward from the viewpoint, following them as they bounce off surfaces and travel toward other objects in the scene, in order to establish the origin of each ray. It is possible to render a whole scene using only ray tracing, rather than any scan-line techniques; the disadvantage of such an approach, however, is that it is computationally intensive and often slower than scan-line algorithms. In renderThinking it is possible to mix scan-line and ray-tracing techniques, using scan-line algorithms for visibility calculations and ray tracing only to handle reflections, refractions, or shadows when required.
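
The hybrid approach can be sketched as follows, with hypothetical data structures standing in for the renderer's internals: local shading is assumed to come from the scan-line pass, and a reflection ray is traced only when the visible surface is reflective.

    def mix(a, b, t):
        return tuple(a[i] * (1.0 - t) + b[i] * t for i in range(3))

    def shade_hit(hit, trace_ray, depth=0, max_depth=3):
        color = hit["local_color"]                    # scan-line style local shading
        reflectivity = hit["reflectivity"]
        if reflectivity > 0.0 and depth < max_depth:
            # A ray is traced only for reflective surfaces, and only to a limited depth.
            bounce = trace_ray(hit["point"], hit["reflect_dir"])
            if bounce is not None:
                reflected = shade_hit(bounce, trace_ray, depth + 1, max_depth)
                color = mix(color, reflected, reflectivity)
        return color

    # Toy usage: a mirror-like surface that "sees" a matte red wall.
    wall = {"local_color": (0.7, 0.1, 0.1), "reflectivity": 0.0,
            "point": None, "reflect_dir": None}
    mirror = {"local_color": (0.1, 0.1, 0.1), "reflectivity": 0.8,
              "point": (0.0, 0.0, 0.0), "reflect_dir": (1.0, 0.0, 0.0)}
    print(shade_hit(mirror, lambda origin, direction: wall))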

 

Any number of light sources can be positioned within the three-dimensional scene along with the objects of interest, so as to produce shading and shadows representative of particular lighting conditions. These light sources can be given particular intensities and colors, and include: point lights, which radiate light equally in all directions; distant lights, for modeling sunlight; and spot lights, which produce a directional cone of light.
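
As a sketch, the three light types might be modeled with simple data structures such as the following; these are hypothetical and do not reflect renderThinking's actual scene format:

    from dataclasses import dataclass

    @dataclass
    class PointLight:            # radiates light equally in all directions
        position: tuple
        color: tuple
        intensity: float

    @dataclass
    class DistantLight:          # parallel rays, e.g. for modeling sunlight
        direction: tuple
        color: tuple
        intensity: float

    @dataclass
    class SpotLight:             # produces a directional cone of light
        position: tuple
        direction: tuple
        cone_angle_deg: float
        color: tuple
        intensity: float

    lights = [
        PointLight((0.0, 5.0, 0.0), (1.0, 1.0, 1.0), 80.0),
        DistantLight((0.0, -1.0, 0.0), (1.0, 0.95, 0.9), 1.2),
        SpotLight((2.0, 4.0, 2.0), (-0.4, -1.0, -0.4), 30.0, (1.0, 1.0, 1.0), 60.0),
    ]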

 

It is also possible to represent transparent materials, which alter the color of objects seen through them and may reflect a portion of the light incident on them, giving them a shiny appearance.
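
One simple way such a surface could combine colors is sketched below; the formula is illustrative only and is not necessarily the one used by renderThinking.

    def transparent_color(background, filter_color, reflection,
                          transparency, reflectivity):
        # Light seen through the material is tinted by the material's own color.
        through = tuple(background[i] * filter_color[i] for i in range(3))
        base = tuple(transparency * through[i] + (1.0 - transparency) * filter_color[i]
                     for i in range(3))
        # A fraction of the incident light is reflected, giving a shiny look.
        return tuple((1.0 - reflectivity) * base[i] + reflectivity * reflection[i]
                     for i in range(3))

    # A white object behind pale blue glass that also reflects a gray environment.
    print(transparent_color((1, 1, 1), (0.8, 0.9, 1.0), (0.3, 0.3, 0.3), 0.9, 0.1))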

 

Patterns can be attached to objects whether they are matte or shiny, small undulations can be applied to the surface of any object regardless of its color, reflectivity or transparency, and so on. A mechanism is provided whereby the overall appearance of an object is specified in terms of a 'material' definition.

 

A material definition is made up of several components, specifying attributes such as reflectivity, transparency, color, and texture. These material definitions can be associated with any number of objects, and the individual shader components can be used in any number of material definitions, so that the appearance of all the geometry in a whole scene can quickly be built up from a library of shader components.
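
A material built from reusable components might be sketched like this; the structure is hypothetical, and renderThinking's actual material format may differ:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Material:
        color: tuple = (0.8, 0.8, 0.8)
        reflectivity: float = 0.0
        transparency: float = 0.0
        texture: Optional[str] = None        # name of a texture shader component
        displacement: Optional[str] = None   # e.g. a small surface-undulation component

    # Material definitions reused across several objects in the scene.
    brushed_metal = Material(color=(0.6, 0.6, 0.65), reflectivity=0.4, texture="brushed")
    tinted_glass = Material(color=(0.8, 0.9, 1.0), transparency=0.9, reflectivity=0.1)

    objects = {"handle": brushed_metal, "trim": brushed_metal, "panel": tinted_glass}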

 

Radiosity-based light simulation is also available in renderThinking: it gives a stunning level of realism thanks to its accurate simulation of diffuse lighting conditions.

renderThinking also provides a mechanism for performing post-processing operations on images once they have been rendered. This is achieved by rendering into a screen buffer, on which post-processing is performed before the image is output in the normal way.
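
As an illustration of the principle, assuming a hypothetical in-memory buffer of RGB rows, a simple brightness adjustment could be applied before the image is written out:

    def brighten(buffer, gain=1.2):
        # Scale every channel, clamping to the displayable range.
        return [[tuple(min(1.0, channel * gain) for channel in pixel) for pixel in row]
                for row in buffer]

    screen_buffer = [[(0.2, 0.3, 0.4), (0.5, 0.5, 0.5)],
                     [(0.0, 0.0, 0.0), (0.9, 0.8, 0.7)]]
    print(brighten(screen_buffer))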